The Possibility of Combining and Implementing Deep Neural Network Compression Methods


Abstract

In this paper, the possibility of combining deep neural network (DNN) model compression methods to achieve better results was considered. To compare the advantages and disadvantages of each method, all methods were applied to a ResNet18 model pretrained on the NCT-CRC-HE-100K dataset, with CRC-VAL-HE-7K used as the validation dataset. The proposed quantization, pruning, weight clustering, QAT (quantization-aware training), cluster-preserving QAT (hereinafter PCQAT), and distillation were performed on ResNet18. The final evaluation of the obtained models was carried out on a Raspberry Pi 4 device. The greatest result on disk was achieved by applying PCQAT, whose application led to a reduction in the size of the initial model by as much as 45 times, whereas the greatest acceleration was achieved via distillation to a MobileNetV2 model. All methods compressed the model with only a slight loss of accuracy, or an increase in accuracy in the case of clustering. INT8 quantization and knowledge distillation also led to a significant decrease in execution time.
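Among the combined methods, INT8 quantization is the simplest to illustrate in isolation. The sketch below shows affine (asymmetric) post-training quantization of a single weight tensor to signed 8-bit integers; it is a minimal, hypothetical illustration of the general technique, not the paper's implementation or pipeline.

```python
import numpy as np

def quantize_int8(weights):
    """Affine post-training quantization of a float weight tensor to INT8.
    Illustrative sketch only, not the paper's code."""
    w_min, w_max = float(weights.min()), float(weights.max())
    # Map the observed range [w_min, w_max] onto the signed 8-bit range [-128, 127].
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map INT8 values back to approximate float weights."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# INT8 storage is 4x smaller than float32, and the round-trip
# error is bounded by half a quantization step.
print(q.nbytes, w.nbytes)  # 4096 16384
```

In practice, per-channel scales and calibrated activation ranges (as in QAT) reduce the accuracy loss further; this per-tensor scheme only shows the core scale/zero-point arithmetic.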


Similar Articles

Universal Deep Neural Network Compression

Compression of deep neural networks (DNNs) for memory- and computation-efficient compact feature representations becomes a critical problem, particularly for the deployment of DNNs on resource-limited platforms. In this paper, we investigate lossy compression of DNNs by weight quantization and lossless source coding for memory-efficient inference. Whereas the previous work addressed non-universal scal...


Scour Modeling of the Piles of the Kambuzia Industrial City Bridge Using HEC-RAS and an Artificial Neural Network

Today, scouring is one of the important topics in river and coastal engineering, as most bridge failures occur due to this phenomenon. Since bridges are among the most important connecting structures in a country's road network, and their importance is doubled during floodwater, their exact design and maintenance are very crucial. F...

Automated Pruning for Deep Neural Network Compression

In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is ...
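For contrast with the differentiable, in-training pruning described above, the baseline it improves on is one-shot magnitude pruning, which simply zeroes the smallest-magnitude weights after training. A minimal sketch of that baseline (an illustrative assumption, not the paper's differentiable method):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """One-shot magnitude pruning: zero out the fraction `sparsity` of
    weights with the smallest absolute value. Baseline sketch; the paper
    above instead makes pruning differentiable so it can happen during
    backpropagation."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(1)
w = rng.normal(size=(128, 128)).astype(np.float32)
pruned, mask = magnitude_prune(w, 0.9)
print(1.0 - mask.mean())  # fraction of zeroed weights, approximately 0.9
```

The resulting sparse tensor compresses well on disk but needs sparse kernels (or structured pruning) to translate into actual speedups on devices such as the Raspberry Pi 4.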


On the Comparison of Keyword and Semantic-Context Methods of Learning New Vocabulary Meaning

The rationale behind the present study is that particular learning strategies produce more effective results when applied together. The present study tried to investigate the efficiency of the semantic-context strategy along with a technique called the keyword method. To clarify the point, the current study sought to answer the following question: are the keyword and semantic-context metho...


Implementing the Deep Q-Network

The Deep Q-Network proposed by Mnih et al. [2015] has become a benchmark and building point for much deep reinforcement learning research. However, replicating results for complex systems is often challenging since original scientific publications are not always able to describe in detail every important parameter setting and software engineering solution. In this paper, we present results from...



Journal

Journal title: Axioms

Year: 2022

ISSN: 2075-1680

DOI: https://doi.org/10.3390/axioms11050229